- Automated feedback can provide students with timely information about their writing, but students' willingness to engage meaningfully with that feedback to revise their writing may be influenced by their perceptions of its usefulness. We explored the factors that may have influenced 339 eighth-grade students' perceptions of receiving automated feedback on their writing, and whether their perceptions affected their revisions and writing improvement. Using HLM and logistic regression analyses (a sketch of such a model follows this abstract), we found that: 1) students with more positive perceptions of the automated feedback made revisions that resulted in significant improvements in their writing, and 2) students who received feedback indicating they had included more important ideas in their essays had significantly higher perceptions of the usefulness of the feedback, but were significantly less likely to engage in substantive revisions. Implications, and the importance of helping students evaluate and reflect on the feedback to make substantive revisions no matter their initial feedback, are discussed. (Free, publicly accessible full text available June 9, 2026.)
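As a rough illustration of the kind of logistic regression the abstract describes, the sketch below fits a model predicting whether a student made a substantive revision from their perceived usefulness of the feedback and the number of rubric ideas the feedback detected. All column names, data, and effect sizes are simulated stand-ins, not the study's variables or results.

```python
# Hypothetical sketch: logistic regression on simulated data mirroring the
# pattern the abstract reports. Nothing here comes from the actual study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 339  # matches the abstract's sample size; the rows are simulated
df = pd.DataFrame({
    # 1-5 Likert-style rating of how useful the feedback seemed (invented)
    "perceived_usefulness": rng.integers(1, 6, size=n),
    # count of important rubric ideas the feedback said were present (invented)
    "ideas_detected": rng.integers(0, 7, size=n),
})
# Simulate the reported pattern: higher perceived usefulness raises the odds
# of revising; more ideas already detected lowers them.
logit = -0.5 + 0.6 * df["perceived_usefulness"] - 0.4 * df["ideas_detected"]
df["substantive_revision"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit(
    "substantive_revision ~ perceived_usefulness + ideas_detected", data=df
).fit()
print(model.summary())
print("Odds ratios:\n", np.exp(model.params))
```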
- As use of artificial intelligence (AI) has increased, concerns about AI bias and discrimination have been growing. This paper discusses an application called PyrEval in which natural language processing (NLP) was used to automate assessment and provide feedback on middle school science writing without linguistic discrimination. Linguistic discrimination in this study was operationalized as unfair assessment of scientific essays based on writing features that are not considered normative, such as subject-verb disagreement. Such unfair assessment is especially problematic when the purpose of assessment is not assessing English writing but rather assessing the content of scientific explanations. PyrEval was implemented in middle school science classrooms. Students explained their roller coaster designs by stating relationships among science concepts such as potential energy, kinetic energy, and the law of conservation of energy. Initial and revised versions of scientific essays written by 307 eighth-grade students were analyzed. Our comparison of manual and NLP assessments showed that PyrEval did not penalize student essays that contained non-normative writing features. Repeated measures ANOVA and GLMM analysis results (a simplified sketch of this kind of model follows this abstract) revealed that essay quality significantly improved from initial to revised essays after students received the NLP feedback, regardless of non-normative writing features. Findings and implications are discussed. (Free, publicly accessible full text available May 25, 2026.)
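The abstract above reports repeated measures ANOVA and GLMM analyses of initial versus revised essay quality. Below is a minimal sketch of one way such a comparison can be set up, using a mixed model with a per-student random intercept; the data, column names, and effect sizes are simulated for illustration and are not the study's.

```python
# Hypothetical sketch: mixed model on simulated two-draft essay scores.
# Each student contributes an initial (time=0) and revised (time=1) essay.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students = 307  # matches the abstract's sample size; scores are simulated
student = np.repeat(np.arange(n_students), 2)
time = np.tile([0, 1], n_students)  # 0 = initial draft, 1 = revised draft
# Invented flag: essay contains non-normative writing features
nonnorm = np.repeat(rng.integers(0, 2, n_students), 2)

# Simulate quality improving after feedback, with no penalty for
# non-normative writing (the pattern the abstract reports).
quality = (
    3.0 + 0.8 * time + 0.0 * nonnorm
    + np.repeat(rng.normal(0, 0.5, n_students), 2)  # per-student intercept
    + rng.normal(0, 0.4, 2 * n_students)            # residual noise
)
df = pd.DataFrame({"student": student, "time": time,
                   "nonnormative": nonnorm, "quality": quality})

# Random intercept per student; fixed effects for draft, writing features,
# and their interaction.
m = smf.mixedlm("quality ~ time * nonnormative", df, groups=df["student"]).fit()
print(m.summary())
```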
- Automated methods are increasingly used to support formative feedback on students' science explanation writing. Most of this work addresses students' responses to short-answer questions. We investigate automated feedback on students' science explanation essays, which discuss multiple ideas. Feedback is based on a rubric that identifies the main ideas students are prompted to include in explanatory essays about the physics of energy and mass. We have found that students' revisions generally improve their essays. Here, we focus on two factors that affect the accuracy of the automated feedback. First, learned representations of the six main ideas in the rubric differ with respect to their distinctiveness from one another, and therefore in the ability of automated methods to identify them in student essays. Second, a student's statement sometimes lacks sufficient clarity for the automated tool to associate it more strongly with one of the main ideas above all others. (A simplified similarity-matching sketch follows this abstract.)
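The abstract above attributes feedback errors to indistinct idea representations and ambiguous student statements. The sketch below is a deliberately simplified stand-in (not PyrEval's actual pipeline) for similarity-based idea matching with a margin check: TF-IDF vectors, cosine similarity, and a hypothetical distinctiveness threshold. The rubric phrasings and the threshold value are invented.

```python
# Simplified, hypothetical sketch of matching a student sentence to rubric
# "main ideas" by vector similarity. If the best and second-best matches are
# too close, no single idea wins -- the clarity problem the abstract describes.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

main_ideas = [  # paraphrased physics ideas; illustrative only
    "potential energy depends on height and mass",
    "kinetic energy depends on speed and mass",
    "total energy is conserved as the coaster moves",
]
sentence = "as the car drops, potential energy becomes kinetic energy"

vec = TfidfVectorizer().fit(main_ideas + [sentence])
M = vec.transform(main_ideas).toarray()
s = vec.transform([sentence]).toarray()[0]

def cosine(a, b):
    # cosine similarity with a small epsilon to avoid division by zero
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

sims = np.array([cosine(row, s) for row in M])
best, second = np.argsort(sims)[::-1][:2]
MARGIN = 0.05  # hypothetical distinctiveness threshold
if sims[best] - sims[second] >= MARGIN:
    print(f"matched idea {best}: {main_ideas[best]} (sim={sims[best]:.2f})")
else:
    print("statement too ambiguous to assign a single main idea")
```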
- Automated writing evaluation (AWE) systems automatically assess and provide students with feedback on their writing. Despite the learning benefits, students may not effectively interpret and use AI-generated feedback, and so may not maximize their learning outcomes. A closely related issue is accuracy: students may not understand that these systems are not perfect. Our study investigates whether students differentially addressed false positive and false negative AI-generated feedback errors on their science essays. We found that students addressed nearly all of the false negative feedback; however, they addressed less than one-fourth of the false positive feedback. The odds of addressing false positive feedback were 99% lower than the odds of addressing false negative feedback (a toy odds-ratio calculation follows this abstract), representing significant missed opportunities for revision and learning. We discuss the implications of these findings in the context of students' learning.
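The 99% figure above is an odds ratio. The toy calculation below shows how such a ratio is computed from counts of addressed and unaddressed feedback items; the counts are invented so the ratio lands near the reported figure and are not the study's data.

```python
# Toy odds-ratio calculation with invented counts (not the study's data):
# students address ~96% of false negative (FN) feedback but only ~20% of
# false positive (FP) feedback.
fp_addressed, fp_ignored = 20, 80   # < one-fourth of FP feedback addressed
fn_addressed, fn_ignored = 96, 4    # nearly all FN feedback addressed

odds_fp = fp_addressed / fp_ignored    # 0.25
odds_fn = fn_addressed / fn_ignored    # 24.0
odds_ratio = odds_fp / odds_fn         # ~0.0104
print(f"odds ratio = {odds_ratio:.4f}")
print(f"addressing FP feedback has odds {(1 - odds_ratio):.0%} lower than FN")
```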
- Hoadley, C.; Wang, X. C. (Eds.) The present study examined teachers' conceptualization of the role of AI in addressing inequity. Grounded in speculative design and education, we examined eight secondary public school teachers' thinking about AI in teaching and learning that may go beyond present horizons. Data were collected from individual interviews. Findings suggest that not only equity consciousness but also present engagement in contexts of inequity were crucial to future dreaming of AI that does not harm but improves equity.
- Hoadley, C.; Wang, X. C. (Eds.) In this paper, we present a case study of designing AI-human partnerships in the real-world context of science classrooms. We designed a classroom environment where AI technologies, teachers, and peers worked synergistically to support students' writing in science. In addition to an NLP algorithm that automatically assesses students' essays, we also designed (i) feedback that was easier for students to understand; (ii) participatory structures in the classroom focusing on reflection, peer review, and discussion; and (iii) scaffolding by teachers to help students understand the feedback. Our results showed that students improved their written explanations after receiving feedback and engaging in reflection activities. Our case study illustrates that Augmented Intelligence (USDoE, 2023), in which the strengths of AI complement the strengths of teachers and peers while also overcoming the limitations of each, can provide multiple forms of support to foster learning and teaching.
- Clarke-Midura, J.; Kollar, I.; Gu, X.; D'Angelo, C. (Eds.) This study investigates small-group collaborative learning in a technology-supported environment. We aim to reveal key aspects of collaborative learning by examining variations in interaction, the influence of small-group collaboration on science knowledge integration, and the implications for individual knowledge mastery. Results underscore the importance of high-quality science discourse and user-friendly tools. The study also highlights that group-level negotiations may not always affect individual understanding. Overall, this research offers insights into the complexities of collaboration and its impact on science learning.
- Hoadley, C.; Wang, X. C. (Eds.) Eighth-grade students received automated feedback from PyrEval, an NLP tool, about their science essays. We examined how essay quality changed when essays were revised. Regardless of prior physics knowledge, essay quality improved. Grounded in the literature on AI explainability and trust in automated feedback, we also examined which PyrEval explanations predicted essay quality change. Essay quality improvement was predicted by high- and medium-accuracy feedback.